
    Maximizing decision rate in multisensory integration

    Effective decision-making in an uncertain world requires making use of all available information, even if distributed across different sensory modalities, as well as trading off the speed of a decision with its accuracy. In tasks with a fixed stimulus presentation time, animal and human subjects have previously been shown to combine information from several modalities in a statistically optimal manner. Furthermore, for easily discriminable stimuli and under the assumption that reaction times result from a race-to-threshold mechanism, multimodal reaction times are typically faster than predicted from unimodal conditions when assuming independent (parallel) races for each modality. However, due to a lack of adequate ideal observer models, it has remained unclear whether subjects perform optimal cue combination when they are allowed to choose their response times freely.
Based on data collected from human subjects performing a visual/vestibular heading discrimination task, we show that the subjects exhibit worse discrimination performance in the multimodal condition than predicted by standard cue combination criteria, which relate multimodal discrimination performance to sensitivity in the unimodal conditions. Furthermore, multimodal reaction times are slower than those predicted by a parallel race model, opposite to what is commonly observed for easily discriminable stimuli.
Despite violating the standard criteria for optimal cue combination, we show that subjects still accumulate evidence optimally across time and cues, even when the strength of the evidence varies with time. Additionally, subjects adjust their decision bounds, which control the trade-off between the speed and accuracy of a decision, such that their rate of correct decisions is close to the maximum achievable value.
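
Both benchmarks invoked here have simple closed forms. Under the standard cue-combination criterion, unimodal reliabilities (inverse variances of the heading estimates) add, which predicts the bimodal discrimination threshold; under an independent race model, the bimodal reaction time is the minimum of the two unimodal times. A minimal sketch of both predictions, with made-up threshold and timing values for illustration:

```python
import numpy as np

# Hypothetical unimodal heading thresholds (deg); illustrative only.
sigma_vis, sigma_vest = 2.0, 3.5

# Standard optimal cue combination: reliabilities (inverse variances)
# add, so the predicted bimodal threshold is lower than either
# unimodal threshold.
sigma_comb = np.sqrt((sigma_vis**2 * sigma_vest**2) /
                     (sigma_vis**2 + sigma_vest**2))
print(f"predicted bimodal threshold: {sigma_comb:.2f} deg")

# Parallel (independent) race prediction for reaction times: the
# bimodal RT is the minimum of two independent unimodal RTs, so
# F_comb(t) = 1 - (1 - F_vis(t)) * (1 - F_vest(t)).
t = np.linspace(0.0, 3.0, 301)            # time (s)
F_vis = 1.0 - np.exp(-t / 0.8)            # toy unimodal RT CDFs
F_vest = 1.0 - np.exp(-t / 1.0)
F_race = 1.0 - (1.0 - F_vis) * (1.0 - F_vest)
# Measured bimodal thresholds above sigma_comb, and bimodal RT CDFs
# lying to the right of F_race, are the two violations reported above.
```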

    Does Neuronal Synchrony Underlie Visual Feature Grouping?

Previous research suggests that synchronous neural activity underlies perceptual grouping of visual image features. The generality of this mechanism is unclear, however, as previous studies have focused on pairs of neurons with overlapping or collinear receptive fields. By sampling more broadly and employing stimuli that contain partially occluded objects, we have conducted a more incisive test of the binding-by-synchrony hypothesis in area MT. We find that synchrony in spiking activity shows little dependence on feature grouping, whereas gamma-band synchrony in field potentials can be significantly stronger when features are grouped. However, these changes in gamma-band synchrony are small relative to the variability of synchrony across recording sites and do not provide a robust population signal for feature grouping. Moreover, these effects are reduced when stimulus differences near the receptive fields are eliminated using partial occlusion. Our findings suggest that synchrony does not constitute a general mechanism of visual feature binding.
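
The abstract does not spell out its synchrony measure, but spike synchrony of this kind is conventionally assessed with a cross-correlogram between simultaneously recorded spike trains. The sketch below illustrates the idea on simulated Poisson trains that share a common input; all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.001, 10.0                  # 1 ms bins, 10 s of data
n = int(T / dt)
rate = 20.0                          # baseline rate (spikes/s)

# Two Poisson trains plus a shared input that injects coincident
# spikes, producing a zero-lag synchrony peak.
shared = rng.random(n) < 5.0 * dt
a = ((rng.random(n) < rate * dt) | shared).astype(float)
b = ((rng.random(n) < rate * dt) | shared).astype(float)

max_lag = 50                         # +/- 50 ms
lags = np.arange(-max_lag, max_lag + 1)
ccg = np.array([(a[max(0, -k):n - max(0, k)] *
                 b[max(0, k):n - max(0, -k)]).sum() for k in lags])
ccg /= np.sqrt(a.sum() * b.sum())    # geometric-mean rate normalization
print("zero-lag coincidences (normalized):", round(float(ccg[max_lag]), 3))
```

Comparing such zero-lag peaks between grouped and ungrouped stimulus conditions is the kind of test the abstract describes.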

    Parallel input channels to mouse primary visual cortex

It is generally accepted that in mammals visual information is sent to the brain along functionally specialized parallel pathways, but whether the mouse visual system uses similar processing strategies is not known. It is important to resolve this issue because the mouse brain provides a tractable system for developing a cellular and molecular understanding of disorders affecting spatiotemporal visual processing. We have used single-unit recordings in mouse primary visual cortex to study whether individual neurons are more sensitive to one set of sensory cues than another. Our quantitative analyses show that neurons with short response latencies have low spatial acuity and high sensitivity to contrast, temporal frequency and speed, whereas neurons with long latencies have high spatial acuity and low sensitivity to contrast, temporal frequency and speed. These correlations suggest that neurons in mouse V1 receive inputs from a weighted combination of parallel afferent pathways with distinct spatiotemporal sensitivities.
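
The quantitative analysis described reduces to correlating, across the recorded population, each unit's response latency with its tuning measurements. A toy version of that analysis with fabricated per-unit values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units = 60

# Fabricated per-unit values mimicking the reported pattern: longer
# response latencies go with higher spatial-frequency cutoffs.
latency = rng.uniform(40.0, 120.0, n_units)                       # ms
sf_cutoff = 0.1 + 0.004 * latency + rng.normal(0, 0.05, n_units)  # cyc/deg

r = np.corrcoef(latency, sf_cutoff)[0, 1]
print(f"latency vs. spatial acuity: r = {r:+.2f}")
# Correlations of this sign across latency, acuity, contrast and speed
# sensitivity motivate the weighted-parallel-pathway interpretation.
```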

    Perceptual “Read-Out” of Conjoined Direction and Disparity Maps in Extrastriate Area MT

Cortical neurons are frequently tuned to several stimulus dimensions, and many cortical areas contain intercalated maps of multiple variables. Relatively little is known about how information is “read out” of these multidimensional maps. For example, how does an organism extract information relevant to the task at hand from neurons that are also tuned to other, irrelevant stimulus dimensions? We addressed this question by employing microstimulation techniques to examine the contribution of disparity-tuned neurons in the middle temporal (MT) visual area to performance on a direction discrimination task. Most MT neurons are tuned to both binocular disparity and the direction of stimulus motion, and MT contains topographic maps of both parameters. We assessed the effect of microstimulation on direction judgments after first characterizing the disparity tuning of each stimulation site. Although the disparity of the stimulus was irrelevant to the required task, we found that microstimulation effects were strongly modulated by the disparity tuning of the stimulated neurons. For two of three monkeys, microstimulation of nondisparity-selective sites produced large biases in direction judgments, whereas stimulation of disparity-selective sites had little or no effect. The binocular disparity was optimized for each stimulation site, and our result could not be explained by variations in direction tuning, response strength, or any other tuning property that we examined. When microstimulation of a disparity-tuned site did affect direction judgments, the effects tended to be stronger at the preferred disparity of a stimulation site than at the nonpreferred disparity, indicating that monkeys can selectively monitor direction columns that are best tuned to an appropriate conjunction of parameters. We conclude that the contribution of neurons to behavior can depend strongly upon tuning to stimulus dimensions that appear to be irrelevant to the current task, and we suggest that these findings are best explained in terms of the strategy used by animals to perform the task.
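
Microstimulation effects like these are usually quantified as a horizontal shift of the psychometric function, i.e., the change in the point of subjective equality (PSE) between stimulated and control trials. A sketch of that analysis under the assumption of a logistic model, with simulated choice data and invented parameter values:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(coh, bias, slope):
    # P(preferred-direction choice) as a function of signed coherence.
    return 1.0 / (1.0 + np.exp(-slope * (coh - bias)))

rng = np.random.default_rng(2)
coh = np.linspace(-0.5, 0.5, 11)       # signed motion coherence
n_trials = 100                         # trials per coherence level

for label, true_bias in [("control   ", 0.0), ("microstim ", -0.12)]:
    p_obs = rng.binomial(n_trials, logistic(coh, true_bias, 10.0)) / n_trials
    (pse, slope), _ = curve_fit(logistic, coh, p_obs, p0=[0.0, 5.0])
    print(f"{label} PSE = {pse:+.3f}")
# The PSE difference between conditions is the stimulation-induced
# bias; comparing it between disparity-selective and nonselective
# sites yields the dependence reported above.
```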

    Vestibular heading discrimination and sensitivity to linear acceleration in head and world coordinates

Effective navigation and locomotion depend critically on an observer's ability to judge direction of linear self-motion, i.e., heading. The vestibular cue to heading is the direction of inertial acceleration that accompanies transient linear movements. This cue is transduced by the otolith organs. The otoliths also respond to gravitational acceleration, so vestibular heading discrimination could depend on (1) the direction of movement in head coordinates (i.e., relative to the otoliths), (2) the direction of movement in world coordinates (i.e., relative to gravity), or (3) body orientation (i.e., the direction of gravity relative to the otoliths). To quantify these effects, we measured vestibular and visual discrimination of heading along azimuth and elevation dimensions with observers oriented both upright and side-down relative to gravity. We compared vestibular heading thresholds with corresponding measurements of sensitivity to linear motion along lateral and vertical axes of the head (coarse direction discrimination and amplitude discrimination). Neither heading nor coarse direction thresholds depended on movement direction in world coordinates, demonstrating that the nervous system compensates for gravity. Instead, they depended similarly on movement direction in head coordinates (better performance in the horizontal plane) and on body orientation (better performance in the upright orientation). Heading thresholds were correlated with, but significantly larger than, predictions based on sensitivity in the coarse discrimination task. Simulations of a neuron/anti-neuron pair with idealized cosine-tuning properties show that heading thresholds larger than those predicted from coarse direction discrimination could be accounted for by an amplitude-response nonlinearity in the neural representation of inertial motion.
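
The neuron/anti-neuron simulation mentioned at the end can be sketched in its simplest form: two units with mirror-image cosine tuning to heading azimuth, independent Gaussian noise, and a threshold read off a d' criterion on their difference signal. The preferred directions (±90 deg), gains, and noise level below are placeholders, and the paper's specific amplitude-response nonlinearity is not reproduced here:

```python
import numpy as np

# Idealized neuron/anti-neuron pair: cosine tuning to heading azimuth
# theta with preferred directions at +90 and -90 deg, so the pair's
# difference signal is D(theta) = 2 * g * sin(theta).
def heading_threshold(gain, noise=1.0, criterion=1.0):
    """Smallest angle (deg) at which headings +theta and -theta are
    discriminable at d' = criterion, given independent Gaussian noise
    of SD `noise` on each unit:
        d' = |D(+th) - D(-th)| / (noise*sqrt(2)) = 4*g*sin(th) / (noise*sqrt(2))
    """
    s = criterion * noise * np.sqrt(2.0) / (4.0 * gain)
    return float(np.degrees(np.arcsin(np.clip(s, 0.0, 1.0))))

for g in (0.5, 1.0, 2.0):   # placeholder gains
    print(f"gain {g}: threshold = {heading_threshold(g):.2f} deg")
# Passing stimulus amplitude through a nonlinearity before this stage
# (as the paper proposes) changes the effective gain and hence the
# predicted heading threshold.
```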

    Neural correlates of prior expectations of motion in the lateral intraparietal and middle temporal areas

Successful decision-making involves combining observations of the external world with prior knowledge. Recent studies suggest that neural activity in macaque lateral intraparietal area (LIP) provides a useful window into this process. This study examines how rapidly changing prior knowledge about an upcoming sensory stimulus influences the computations that convert sensory signals into plans for action. Two monkeys performed a cued direction discrimination task, in which an arrow cue presented at the start of each trial communicated the prior probability of the direction of stimulus motion. We hypothesized that the cue would either shift the initial level of LIP activity before sensory evidence arrives, or it would scale sensory responses according to the prior probability of each stimulus, manifesting as a change in slope of LIP firing rates. Neural recordings demonstrated a clear shift in the activity level of LIP neurons following the arrow cue, which persisted into the presentation of the motion stimulus. No significant change in slope of responses was observed, suggesting that sensory gain was not strongly modulated. To confirm the latter observation, MT neurons were recorded during a version of the cued direction discrimination task, and we found no change in MT responses resulting from the presentation of the directional cue. These results suggest that information about an immediately upcoming stimulus does not scale the sensory response, but rather changes the amount of evidence that must be accumulated to reach a decision in areas that are involved in planning action.
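
One common reading of these two hypotheses maps them onto two parameters of a standard drift-diffusion account of LIP activity: a cue-induced offset shifts the accumulator's starting point (equivalently, it reduces the evidence needed for the favored choice), while sensory gain scaling multiplies the drift rate and hence the slope of the firing-rate ramp. A toy simulation contrasting the two, with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)

def ddm(drift, start, n=2000, bound=1.0, dt=0.001, sigma=1.0, t_max=5.0):
    """Vectorized drift-diffusion trials; returns P(positive), mean RT."""
    x = np.full(n, float(start))
    rt = np.full(n, np.nan)
    hit_pos = np.zeros(n, dtype=bool)
    alive = np.ones(n, dtype=bool)
    for i in range(int(t_max / dt)):
        k = alive.sum()
        if k == 0:
            break
        x[alive] += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(k)
        up = alive & (x >= bound)
        dn = alive & (x <= -bound)
        rt[up | dn] = (i + 1) * dt
        hit_pos[up] = True
        alive &= ~(up | dn)
    done = ~alive
    return hit_pos[done].mean(), rt[done].mean()

# A prior favoring the positive choice, implemented two ways:
for label, drift, start in [("baseline      ", 0.5, 0.0),
                            ("shifted start ", 0.5, 0.3),   # offset hypothesis
                            ("scaled drift  ", 1.0, 0.0)]:  # gain hypothesis
    p, m = ddm(drift, start)
    print(f"{label} P(positive) = {p:.2f}, mean RT = {m:.2f} s")
```

The LIP and MT data above support the offset mechanism (a shifted baseline) and show no evidence for drift/gain scaling.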

    A comparison of vestibular spatiotemporal tuning in macaque parietoinsular vestibular cortex, ventral intraparietal area, and medial superior temporal area

Vestibular responses have been reported in the parieto-insular vestibular cortex (PIVC), the ventral intraparietal area (VIP) and the dorsal medial superior temporal area (MSTd) of macaques. However, differences between areas remain largely unknown and it is not clear whether there is a hierarchy in cortical vestibular processing. We examine the spatiotemporal characteristics of macaque vestibular responses to translational motion stimuli using both empirical and model-based analyses. Temporal dynamics of direction selectivity were similar across areas, although there was a gradual shift in the time of peak directional tuning, with responses in MSTd typically being delayed by 100–150 ms relative to responses in PIVC (VIP was intermediate). Responses as a function of both stimulus direction and time were fit with a spatiotemporal model consisting of separable spatial and temporal response profiles. Temporal responses were characterized by a Gaussian function of velocity, a weighted sum of velocity and acceleration, or a weighted sum of velocity, acceleration, and position. Velocity and acceleration components contributed most to response dynamics, with a gradual shift from acceleration dominance in PIVC to velocity dominance in MSTd. The position component contributed little to temporal responses overall, but was substantially larger in MSTd than PIVC or VIP. The overall temporal delay in model fits also increased substantially from PIVC to VIP to MSTd. This gradual transformation of temporal responses suggests a hierarchy in cortical vestibular processing, with PIVC being most proximal to the vestibular periphery and MSTd being most distal.
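
The temporal model described here has a simple explicit form: given a Gaussian stimulus velocity profile v(t), the predicted rate modulation is a delayed weighted sum R(t) = w_v·v(t−τ) + w_a·a(t−τ) + w_p·p(t−τ), where a(t) and p(t) are the derivative and integral of v(t). A sketch with illustrative weights and delays (values assumed, not the paper's fits):

```python
import numpy as np

t = np.linspace(0, 2, 2001)              # time (s)
dt = t[1] - t[0]

# Gaussian velocity profile of the translation stimulus.
v = np.exp(-0.5 * ((t - 1.0) / 0.2) ** 2)
a = np.gradient(v, dt)                   # acceleration component
p = np.cumsum(v) * dt                    # position component

def model_response(w_v, w_a, w_p, delay):
    """Delayed weighted sum of velocity, acceleration and position."""
    shift = int(round(delay / dt))
    r = w_v * v + w_a * a + w_p * p
    return np.concatenate([np.zeros(shift), r[:len(r) - shift]])

# Illustrative parameter regimes mimicking the reported gradient:
r_pivc = model_response(0.3, 0.7, 0.0, 0.05)   # acceleration-led, short delay
r_mstd = model_response(0.8, 0.2, 0.1, 0.15)   # velocity-led, longer delay
print("peak times: PIVC-like %.2f s, MSTd-like %.2f s"
      % (t[np.argmax(r_pivc)], t[np.argmax(r_mstd)]))
```

Fitting the weights and delay per neuron, and comparing them across PIVC, VIP, and MSTd, is what produces the acceleration-to-velocity and timing gradients reported above.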

    Representation of vestibular and visual cues to self-motion in ventral intraparietal cortex

Convergence of vestibular and visual motion information is important for self-motion perception. One cortical area that combines vestibular and optic flow signals is the ventral intraparietal area (VIP). We characterized unisensory and multisensory responses of macaque VIP neurons to translations and rotations in three dimensions. Approximately half of VIP cells show significant directional selectivity in response to optic flow, half show tuning to vestibular stimuli, and one-third show multisensory responses. Visual and vestibular direction preferences of multisensory VIP neurons could be congruent or opposite. When visual and vestibular stimuli were combined, VIP responses could be dominated by either input, unlike medial superior temporal area (MSTd) where optic flow tuning typically dominates or the visual posterior sylvian area (VPS) where vestibular tuning dominates. Optic flow selectivity in VIP was weaker than in MSTd but stronger than in VPS. In contrast, vestibular tuning for translation was strongest in VPS, intermediate in VIP, and weakest in MSTd. To characterize response dynamics, direction-time data were fit with a spatiotemporal model in which temporal responses were modeled as weighted sums of velocity, acceleration, and position components. Vestibular responses in VIP reflected balanced contributions of velocity and acceleration, whereas visual responses were dominated by velocity. Timing of vestibular responses in VIP was significantly faster than in MSTd, whereas timing of optic flow responses did not differ significantly among areas. These findings suggest that VIP may be proximal to MSTd in terms of vestibular processing but hierarchically similar to MSTd in terms of optic flow processing.

    Role of visual and non-visual cues in constructing a rotation-invariant representation of heading in parietal cortex

    As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.00
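
The computation at issue is visible in the classical equations for retinal flow: for an image point (x, y) with scene depth Z, translational flow scales with 1/Z (producing motion parallax), while rotational flow is independent of depth and carries characteristic perspective terms (x·y, 1+x²) that distort the field. A sketch of the dissociation under a pinhole model with focal length 1; sign conventions and all parameter values are illustrative:

```python
import numpy as np

# Retinal flow for a pinhole eye (focal length 1). Translational flow
# depends on scene depth Z (motion parallax); rotational flow does not,
# but carries perspective terms (x*y, 1+x^2) that distort the field.
def flow(x, y, Z, T, W):
    Tx, Ty, Tz = T                     # observer translation
    Wx, Wy, Wz = W                     # observer rotation
    u = (-Tx + x * Tz) / Z + Wx * x * y - Wy * (1 + x**2) + Wz * y
    v = (-Ty + y * Tz) / Z + Wx * (1 + y**2) - Wy * x * y - Wz * x
    return u, v

x, y = np.meshgrid(np.linspace(-0.5, 0.5, 5), np.linspace(-0.5, 0.5, 5))
Z_near, Z_far = 1.0, 4.0               # two depth planes (m)

# Pure rotation: flow is identical at both depths -> no parallax.
u1, _ = flow(x, y, Z_near, (0, 0, 0), (0, 0.1, 0))
u2, _ = flow(x, y, Z_far,  (0, 0, 0), (0, 0.1, 0))
print("rotation, parallax:", np.abs(u1 - u2).max())     # 0.0

# Forward translation: flow differs across depths -> parallax cue.
u1, _ = flow(x, y, Z_near, (0, 0, 1), (0, 0, 0))
u2, _ = flow(x, y, Z_far,  (0, 0, 1), (0, 0, 0))
print("translation, parallax:", np.abs(u1 - u2).max())  # > 0
```

Because only the rotational component is depth-independent and perspective-distorted, these two purely visual signatures suffice, in principle, to factor rotation out of the measured flow.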

    Coding of stereoscopic depth information in visual areas V3 and V3A

The process of stereoscopic depth perception is thought to begin with the analysis of absolute binocular disparity, the difference in position of corresponding features in the left and right eye images with respect to the points of fixation. Our sensitivity to depth, however, is greater when depth judgments are based on relative disparity, the difference between two absolute disparities, compared to when they are based on absolute disparity. Therefore, the visual system is thought to compute relative disparities for fine depth discrimination. Functional magnetic resonance imaging studies in humans and monkeys have suggested that visual areas V3 and V3A may be specialized for stereoscopic depth processing based on relative disparities. In this study, we measured absolute and relative disparity tuning of neurons in V3 and V3A of alert fixating monkeys, and we compared their basic tuning properties with those published previously for other visual areas. We found that neurons in V3 and V3A predominantly encode absolute, not relative, disparities. We also found that basic parameters of disparity tuning in V3 and V3A are similar to those from other extrastriate visual areas. Finally, by comparing single-unit activity with multi-unit activity measured at the same recording site, we demonstrate that neurons with similar disparity selectivity are clustered in both V3 and V3A. We conclude that areas V3 and V3A are not particularly specialized for processing stereoscopic depth information compared to other early visual areas, at least with respect to the tuning properties that we have examined.
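
The absolute/relative distinction is commonly operationalized as a shift test: vary the disparity of a reference (or pedestal) and ask whether the neuron's tuning curve shifts with it. A pure absolute-disparity cell does not shift; a pure relative-disparity cell shifts fully; the ratio of observed shift to pedestal change quantifies where a cell lies. A sketch with synthetic Gabor-shaped tuning curves (all parameter values invented):

```python
import numpy as np

disp = np.linspace(-1.0, 1.0, 201)          # center disparity (deg)

def tuning(center_disp, pedestal, shift_ratio, pref=0.1, sigma=0.3):
    """Gabor-like disparity tuning. shift_ratio=0: pure absolute
    coding (curve fixed in absolute disparity); shift_ratio=1: pure
    relative coding (curve moves fully with the pedestal)."""
    d = center_disp - shift_ratio * pedestal - pref
    return np.exp(-0.5 * (d / sigma) ** 2) * np.cos(2 * np.pi * d)

for label, sr in [("absolute-coding cell", 0.0),
                  ("relative-coding cell", 1.0)]:
    peaks = [disp[np.argmax(tuning(disp, ped, sr))] for ped in (-0.2, 0.2)]
    shift = (peaks[1] - peaks[0]) / 0.4     # observed shift / pedestal change
    print(f"{label}: shift ratio = {shift:.2f}")
# V3/V3A neurons above behave mostly like the shift_ratio ~ 0 case.
```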